Clinical phenotyping enables the automatic extraction of clinical conditions from patient records, which can be beneficial to doctors and clinics worldwide. However, current state-of-the-art models are mostly applicable to clinical notes written in English. We therefore investigate cross-lingual knowledge transfer strategies to perform this task for clinics that do not use English and have only a small amount of data available. We evaluate these strategies for a Greek and a Spanish clinic, leveraging clinical notes from different clinical domains such as cardiology, oncology, and the ICU. Our results reveal two strategies that outperform the state of the art: translation-based methods combined with domain-specific encoders, and cross-lingual encoders plus adapters. We find that these strategies perform especially well for classifying rare phenotypes, and we advise on which method to prefer in which situation. Our results show that using multilingual data overall improves clinical phenotyping models and can compensate for data sparsity.
Modern speech recognition systems exhibit rapid performance degradation under domain shift. This issue is especially prevalent in data-scarce settings, such as low-resource languages, where the diversity of training data is limited. In this work we propose M2DS2, a simple and sample-efficient finetuning strategy for large pretrained speech models, based on mixed source and target domain self-supervision. We find that including source domain self-supervision stabilizes training and avoids mode collapse of the latent representations. For evaluation, we collect HParl, a $120$-hour speech corpus for Greek, consisting of plenary sessions in the Greek Parliament. We merge HParl with two popular Greek corpora to create GREC-MD, a test-bed for multi-domain evaluation of Greek ASR systems. In our experiments we find that, while other Unsupervised Domain Adaptation baselines fail in this resource-constrained environment, M2DS2 yields significant improvements for cross-domain adaptation, even when only a few hours of in-domain audio are available. When we relax the problem in a weakly supervised setting, we find that independent adaptation of the audio modality using M2DS2 and of the language modality using simple LM augmentation techniques is particularly effective, yielding word error rates comparable to the fully supervised baselines.
Transformers are becoming increasingly popular due to their superior performance over conventional convolutional neural networks (CNNs). However, transformers usually require a much larger amount of memory to train than CNNs, which prevents their application in many low-resource settings. Local learning, which divides the network into several distinct modules and trains them individually, is a promising alternative to the end-to-end (E2E) training approach that reduces the amount of memory needed for training and increases parallelism. This paper is the first to apply local learning to transformers for this purpose. The standard CNN-based local learning method, InfoPro [32], reconstructs the input images for each module in a CNN. However, reconstructing the entire image does not generalize well. In this paper, we propose a new mechanism for each local module, where instead of reconstructing the entire image, we reconstruct its input features, generated from previous modules. We evaluate our approach on 4 commonly used datasets and 3 commonly used decoder structures on Swin-Tiny. The experiments show that our approach outperforms InfoPro-Transformer, the InfoPro with Transformer backbone we introduced, by up to 0.58% on the CIFAR-10, CIFAR-100, STL-10 and SVHN datasets, while using up to 12% less memory. Compared to the E2E approach, we require 36% less GPU memory when the network is divided into 2 modules and 45% less GPU memory when the network is divided into 4 modules.
Dense prediction tasks such as segmentation and detection of pathological entities hold crucial clinical value in the digital pathology workflow. However, obtaining dense annotations on large cohorts is usually tedious and expensive. Contrastive learning (CL) is thus often employed to leverage large volumes of unlabeled data to pre-train the backbone network. To boost CL for dense prediction, some studies have proposed variations of dense matching objectives in pre-training. However, our analysis shows that employing existing dense matching strategies on histopathology images enforces invariance among incorrect pairs of dense features and, thus, is imprecise. To address this, we propose a precise location-based matching mechanism that utilizes the overlapping information between geometric transformations to precisely match regions in two augmentations. Extensive experiments on two pre-training datasets (TCGA-BRCA, NCT-CRC-HE) and three downstream datasets (GlaS, CRAG, BCSS) highlight the superiority of our method in semantic and instance segmentation tasks. Our method outperforms previous dense matching methods by up to 7.2% in average precision for detection and 5.6% in average precision for instance segmentation tasks. Additionally, by using our matching mechanism in three popular contrastive learning frameworks, MoCo-v2, VICRegL and ConCL, the average precision in detection is improved by 0.7% to 5.2% and the average precision in segmentation is improved by 0.7% to 4.0%, demonstrating its generalizability.
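The core idea of location-based matching, i.e. using the known geometry of two crop augmentations to pair up only the feature cells that truly cover the same tissue region, can be sketched in a few lines. This is an illustrative toy (the function name, grid size, and crop representation are assumptions for the example), not the paper's implementation:

```python
# Toy sketch of location-based matching between two augmented crops.
# Each crop (x0, y0, x1, y1) is given in original-image coordinates and
# pooled to a grid x grid feature map; only cells whose centers fall in
# the geometric overlap of the two crops are paired as positives.

def match_features(crop_a, crop_b, grid=4):
    """Return matched (cell_a, cell_b) index pairs for two crop windows."""
    ax0, ay0, ax1, ay1 = crop_a
    bx0, by0, bx1, by1 = crop_b
    # Overlap of the two crops in image coordinates.
    ox0, oy0 = max(ax0, bx0), max(ay0, by0)
    ox1, oy1 = min(ax1, bx1), min(ay1, by1)
    if ox0 >= ox1 or oy0 >= oy1:
        return []  # crops do not overlap: no positive pairs
    pairs = []
    for i in range(grid):          # rows of crop A's feature grid
        for j in range(grid):      # columns
            # Center of cell (i, j) mapped back to image coordinates.
            cx = ax0 + (j + 0.5) * (ax1 - ax0) / grid
            cy = ay0 + (i + 0.5) * (ay1 - ay0) / grid
            if not (ox0 <= cx < ox1 and oy0 <= cy < oy1):
                continue
            # Corresponding cell index in crop B's feature grid.
            bj = int((cx - bx0) / (bx1 - bx0) * grid)
            bi = int((cy - by0) / (by1 - by0) * grid)
            pairs.append(((i, j), (bi, bj)))
    return pairs
```

In a contrastive framework, the returned pairs would serve as the positive dense-feature pairs, which avoids enforcing invariance between cells that depict different regions.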
Artificial intelligence (AI) has been widely applied in drug discovery, with a major task being molecular property prediction. Despite the boom of AI techniques in molecular representation learning, some key aspects of molecular property prediction have not been carefully examined. In this study, we conducted a systematic comparison of three representative models, Random Forest, MolBERT, and GROVER, which utilize the three major molecular representations: extended-connectivity fingerprints, SMILES strings, and molecular graphs, respectively. Notably, MolBERT and GROVER are pretrained on large-scale unlabeled molecule corpora in a self-supervised manner. In addition to commonly used molecular benchmark datasets, we also assembled a suite of opioids-related datasets for downstream prediction evaluation. We first conducted dataset profiling on label distribution and structural analyses; we also examined the activity cliff issue in the opioids-related datasets. Then, we trained 4,320 prediction models and evaluated the usefulness of the learned representations. Furthermore, we explored model evaluation by studying the effects of statistical tests, evaluation metrics, and task settings. Finally, we dissected chemical-space generalization into inter-scaffold and intra-scaffold generalization and measured prediction performance to evaluate model generalizability under both settings. By taking this respite, we reflect on fundamental key aspects of molecular property prediction, in the hope of bringing awareness of better AI techniques to this field.
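Extended-connectivity fingerprints encode a molecule by iteratively hashing each atom's neighborhood out to a fixed radius. Real pipelines would use RDKit's Morgan fingerprint routines; the pure-Python toy below (all names and the hashing scheme are illustrative assumptions) only sketches the iterative neighborhood-hashing idea on a hand-built molecular graph:

```python
# Toy Morgan/ECFP-style circular fingerprint on a molecular graph given
# as an atom-symbol list plus a bond list. Not chemically rigorous:
# bond orders, chirality, and canonical invariants are all omitted.

def circular_fingerprint(atoms, bonds, radius=2, n_bits=64):
    """atoms: list of element symbols; bonds: list of (i, j) index pairs.
    Returns the set of on-bit positions in an n_bits-wide fingerprint."""
    neigh = {i: [] for i in range(len(atoms))}
    for i, j in bonds:
        neigh[i].append(j)
        neigh[j].append(i)
    # Initial atom identifiers: element symbol and degree.
    ids = {i: hash((atoms[i], len(neigh[i]))) for i in neigh}
    bits = set()
    for _ in range(radius + 1):
        for i in neigh:
            bits.add(ids[i] % n_bits)  # fold identifier into the bit vector
        # Grow each neighborhood: rehash with sorted neighbor identifiers.
        ids = {i: hash((ids[i], tuple(sorted(ids[j] for j in neigh[i]))))
               for i in neigh}
    return bits
```

Because identifiers depend only on local structure, isomorphic atom relabelings of the same molecule yield identical bit sets, which is the property that makes such fingerprints usable as fixed-length model inputs (e.g., for the Random Forest baseline).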
Ovarian cancer is the most lethal gynaecological malignancy. The disease is most commonly asymptomatic at its early stages, and its diagnosis relies on expert evaluation of transvaginal ultrasound images. Ultrasound is the first-line imaging modality for characterising adnexal masses; it requires significant expertise, and its analysis is subjective and labour-intensive, and therefore prone to error. Hence, automating processes to facilitate and standardise the evaluation of scans is needed in clinical practice. Using supervised learning, we demonstrate that segmentation of adnexal masses is possible; however, prevalence and label imbalance restrict performance on under-represented classes. To mitigate this, we apply a novel pathology-specific data synthesiser. We create synthetic medical images with their corresponding ground-truth segmentations by using Poisson image editing to integrate less common masses into other samples. Our approach achieves the best performance across all classes, including an improvement of up to 8% when compared with the nnU-Net baseline approach.
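Poisson image editing pastes the *gradients* of a source patch into a target image and solves a Poisson equation with the target's values as boundary conditions, so inserted masses blend seamlessly. Production code would typically use OpenCV's `seamlessClone`; the single-channel toy below (a naive Jacobi solver, all names assumed for illustration) only demonstrates the principle behind such a synthesiser:

```python
# Simplified single-channel Poisson blending: the source's interior
# Laplacian is imposed on the target, with the target's border pixels
# acting as the Dirichlet boundary. A naive Jacobi iteration is used;
# real implementations solve the sparse linear system directly.

def poisson_blend(target, source, iters=500):
    """target, source: equal-sized 2D lists (grayscale floats).
    Returns a new 2D list with the source's gradients blended in."""
    h, w = len(target), len(target[0])
    out = [row[:] for row in target]
    for _ in range(iters):
        new = [row[:] for row in out]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                # Laplacian of the source = desired interior gradients.
                lap = (4 * source[y][x] - source[y - 1][x] - source[y + 1][x]
                       - source[y][x - 1] - source[y][x + 1])
                new[y][x] = (out[y - 1][x] + out[y + 1][x]
                             + out[y][x - 1] + out[y][x + 1] + lap) / 4
        out = new
    return out
```

With a constant (gradient-free) source, the interior simply relaxes to a harmonic fill of the target boundary; with a textured source, the mass's internal structure is reproduced while its borders match the surrounding tissue intensities.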
Long-term planning of a robust power system requires an understanding of changing demand patterns. Electricity demand is highly weather-sensitive. Thus, the supply-side variation introduced by intermittent renewable sources, juxtaposed with variable demand, will introduce additional challenges in the grid-planning process. By understanding the spatial and temporal variability of temperature over the US, the response of demand to natural variability can be separated from climate-change-related effects, especially since the impacts arising from the former factor are not well understood. Through this project, we aim to better support the technology and policy development process for power systems by developing machine and deep learning "backcasting" models to reconstruct multidecadal demand records and to study the natural variability of temperature and its influence on demand.
The wide variety of in-distribution and out-of-distribution data in medical imaging makes universal anomaly detection a challenging task. Recently, a number of self-supervised methods have been developed that train end-to-end models on healthy data augmented with synthetic anomalies. However, it is difficult to compare these methods, as it is unclear whether gains in performance come from the task itself or from the training pipeline around it. It is also difficult to assess whether a task generalises well for universal anomaly detection, as such tasks are often only tested on a limited range of anomalies. To assist with this, we have developed nnOOD, a framework that adapts nnU-Net to allow for the comparison of self-supervised anomaly localisation methods. By isolating the synthetic, self-supervised task from the rest of the training process, we perform a more faithful comparison of the tasks, whilst also making the workflow for evaluating over a given dataset quick and easy. Using this framework, we have implemented the current state-of-the-art tasks and evaluated them on a challenging X-ray dataset.
Unsupervised anomaly detection and localisation are crucial tasks, as it is impossible to collect and label all possible anomalies. Many studies have emphasised the importance of integrating local and global information to achieve accurate anomaly segmentation. To this end, there has been growing interest in Transformers, which allow the modelling of long-range content interactions. However, global interactions through self-attention are generally too expensive for most image scales. In this study, we introduce HaloAE, the first auto-encoder based on a local 2D version of the Transformer with HaloNet. With HaloAE, we have created a hybrid model that combines convolution and local 2D block-wise self-attention layers, and that jointly performs anomaly detection and segmentation through a single model. We achieved competitive results on the MVTec dataset, suggesting that vision models incorporating Transformers can benefit from a local computation of the self-attention operation, and paving the way for other applications.
Owing to the advent of deep learning, the recent state of the art in monocular 3D face reconstruction from image data has made impressive progress. However, it has mostly focused on input from a single RGB image, overlooking the following important factors: a) Nowadays, the vast majority of facial image data of interest do not originate from single images but rather from videos, which contain rich dynamic information. b) Furthermore, these videos typically capture individuals in some form of verbal communication (public talks, teleconferences, audiovisual human-computer interactions, interviews, monologues/dialogues in movies, etc.). When existing 3D face reconstruction methods are applied to such videos, the artifacts in the reconstructed shape and motion of the mouth area are often severe, since they do not match the speech audio well. To overcome the aforementioned limitations, we present the first method for visual speech-aware perceptual reconstruction of 3D mouth expressions. We do this by proposing a "lipread" loss, which guides the fitting process so that the perception elicited by the 3D reconstructed talking head resembles that of the original video footage. We demonstrate that, interestingly, the lipread loss is better suited for 3D reconstruction of mouth movements than traditional landmark losses, and even direct 3D supervision. Furthermore, the devised method does not rely on any text transcriptions or corresponding audio, rendering it ideal for training on unlabeled datasets. We verify the efficiency of our method through exhaustive objective evaluations on three large-scale datasets, as well as subjective evaluation with two web-based user studies.